perf(levm): reuse stack pool #5179
base: main

Conversation
Benchmark Results Comparison

No significant difference was registered for any benchmark run.

Detailed Results: BubbleSort, ERC20Approval, ERC20Mint, ERC20Transfer, Factorial, FactorialRecursive, Fibonacci, FibonacciRecursive, ManyHashes, MstoreBench, Push, SstoreBench_no_opt
Benchmark Block Execution Results Comparison Against Main
```rust
std::mem::swap(&mut vm.stack_pool, stack_pool);
let result = vm.execute().map_err(VMError::into);
std::mem::swap(&mut vm.stack_pool, stack_pool);
```
I think it's better to use `std::mem::take` at the beginning and then take it back at the end. We don't need to actually swap.
Pull Request Overview
This PR introduces a stack pool optimization for transaction execution within blocks to improve performance by reusing stack allocations across multiple transactions.
- Adds a new `execute_tx_in_block` function that accepts a mutable stack pool parameter for reuse across transaction executions
- Updates `execute_block_pipeline` to create and maintain a shared stack pool throughout block execution
- Swaps the stack pool with the VM before and after transaction execution to enable stack reuse
```rust
pub fn execute_tx_in_block(
    // The transaction to execute.
    tx: &Transaction,
    // The transactions recovered address
    tx_sender: Address,
    // The block header for the current block.
    block_header: &BlockHeader,
    db: &mut GeneralizedDatabase,
    vm_type: VMType,
    stack_pool: &mut Vec<Stack>,
) -> Result<ExecutionReport, EvmError> {
```
Copilot AI (Nov 6, 2025)
This new public function is missing a documentation comment. Consider adding a doc comment explaining the purpose of this function and how it differs from execute_tx, particularly the role of the stack_pool parameter in optimizing stack allocations across multiple transaction executions within a block.
```rust
std::mem::swap(&mut vm.stack_pool, stack_pool);
let result = vm.execute().map_err(VMError::into);
std::mem::swap(&mut vm.stack_pool, stack_pool);
```
Copilot AI (Nov 6, 2025)
If vm.execute() panics, the second std::mem::swap won't execute, potentially leaving the stack_pool in an inconsistent state. Consider using a guard pattern or RAII wrapper to ensure the swap-back happens even in case of panics, or document that panic safety is not guaranteed here.
Suggested change (replace the swap/execute/swap sequence with):

```rust
struct StackPoolGuard<'a> {
    a: &'a mut Vec<Stack>,
    b: &'a mut Vec<Stack>,
}

impl<'a> Drop for StackPoolGuard<'a> {
    fn drop(&mut self) {
        std::mem::swap(self.a, self.b);
    }
}

std::mem::swap(&mut vm.stack_pool, stack_pool);
let _guard = StackPoolGuard { a: &mut vm.stack_pool, b: stack_pool };
let result = vm.execute().map_err(VMError::into);
```
```rust
) -> Result<BlockExecutionResult, EvmError> {
    Self::prepare_block(block, db, vm_type)?;

    let mut shared_stack_pool = Vec::with_capacity(1024);
```
Copilot AI (Nov 6, 2025)
The magic number 1024 for the stack pool capacity is not explained. Consider either adding a comment explaining why this specific capacity was chosen, or extracting it as a named constant (e.g., STACK_POOL_CAPACITY) to improve code readability and maintainability.
```rust
pub fn execute_tx_in_block(
    // The transaction to execute.
    tx: &Transaction,
    // The transactions recovered address
```
Copilot AI (Nov 6, 2025)
Grammar issue in comment: "The transactions recovered address" should be "The transaction's recovered address" (possessive form).
Motivation
Currently, allocating a new stack object takes ~8% of execution time.
Description
We allocate each stack layer only once and reuse it across the transactions of a block.